NHR PerfLab Seminar 2025-09-02: Compiler Architecture [ID:58692]

Yeah, so I'm going to talk about a thread of work that's been going on in my research group for 10, 12, maybe even 15 years, on domain-specific compilers, domain-specific languages, and program generation. I'll show you what I mean by that. And I'm really talking about the DSLs, not the ones that help you program for specific target hardware, but the ones where you type in the equations you want to solve at the beginning. These are tools that really benefit from the tool chain knowing what the problem you're solving really is. So I'll say a lot more about what I mean by that.

This is joint work with a great number of people, and the list on the slide is incomplete. But I'll show some citations as we go along to some of the papers that I'm going to be referring to.

So who am I? What do I do? Well, I've worked on lots of things, as Gill kindly introduced, including compilers for general-purpose programming languages. For example, I'm particularly proud of one of my PhD graduates, David Pearce, who built a points-to analysis which got adopted into GCC and is cited by people from the LLVM and Go compiler communities, and so on.

And that's fantastic, but in fact the benefits were pretty incremental. Compilers are not very good at understanding what pointers are doing in programs, and this is one of the fundamental barriers that limits what optimizations are possible. Sometimes that's because the analysis is dumb, and we've pushed on that a bit. But the truth is that even heroic interprocedural analyses don't get you very far, partly because pointer sharing is data-dependent: it truly is possible that there might be data that messes with the parallelization strategy, and sometimes such data is even normal. And yet there remains parallelism to exploit, for example.
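To make the aliasing problem concrete, here is a small illustrative sketch (my own toy example, not from the talk): the very same loop produces different results depending on whether its two array arguments refer to the same storage, so a compiler that cannot prove the arguments are distinct must assume the worst and keep the loop serial.

```python
def shift_sum(dst, src, n):
    """Add each src element into the next dst slot.

    If dst and src are the same list (aliased), each iteration reads a
    value the previous iteration just wrote, so the result depends on
    iteration order -- a compiler that cannot rule out aliasing cannot
    safely vectorize or parallelize this loop.
    """
    for i in range(n - 1):
        dst[i + 1] += src[i]

a = [1, 1, 1, 1]
shift_sum(a, [1, 1, 1, 1], 4)   # distinct arrays: iterations independent
print(a)                         # [1, 2, 2, 2]

c = [1, 1, 1, 1]
shift_sum(c, c, 4)               # dst aliases src: a running prefix sum
print(c)                         # [1, 2, 3, 4]
```

This is exactly the information a domain-specific tool chain gets for free: if the language guarantees the operands are distinct fields, the whole analysis problem disappears.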

So meanwhile, I met people doing exciting applications, and they really, really know. I mean, one of the things you learn from talking to computational scientists is that very often they know a lot about how to optimize what they're doing, but it's a huge amount of work. So I got interested in talking to them about what they were doing and automating what they told me about how we should solve their problems. That led to Firedrake, which Giove mentioned, and Devito, and I'll tell you about both of those later in today's presentation, and also, peripherally, another tool called PyFR, which is kind of similar.

These are all tools where you type in the partial differential equations you want to solve. You specify the discretization, and the tool automates the textbook numerical methods and generates high-performance code. So we're simultaneously trying to get the benefits of a really very high-level, abstract programming model and the performance you get when the tool chain really knows what you're doing. So hopefully it's a win and a win.
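As a deliberately tiny sketch of this idea (my own illustration, not Firedrake's or Devito's actual API): the user supplies only the data of the equation, and the "tool chain" knows the problem is a 1-D Poisson equation, so it can apply the textbook method itself, here a centred finite difference and a tridiagonal solve.

```python
def solve_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0, n interior points.

    The 'compiler' part: knowing the problem class, we discretize with
    centred differences, giving the tridiagonal system (-1, 2, -1)/h^2,
    and solve it with the Thomas algorithm in O(n).
    """
    h = 1.0 / (n + 1)
    diag = [2.0] * n                         # main diagonal (scaled by 1/h^2)
    d = [f((i + 1) * h) * h * h for i in range(n)]
    for i in range(1, n):                    # forward elimination
        w = -1.0 / diag[i - 1]               # sub-diagonal (-1) over pivot
        diag[i] -= w * -1.0                  # eliminate super-diagonal (-1)
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        u[i] = (d[i] + u[i + 1]) / diag[i]
    return u

# For f(x) = 2 the exact solution is u(x) = x(1 - x), and centred
# differences reproduce a quadratic exactly.
u = solve_poisson_1d(lambda x: 2.0, 3)
print([round(x, 9) for x in u])   # [0.1875, 0.25, 0.1875]
```

The real tools go much further, of course: you state an arbitrary weak form or stencil symbolically, and the discretization and code generation are both automated and specialized to the target hardware.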

So another thing we tried to do is to pull off the same trick in robot vision. We did lots of fun things with robot vision people. None of them is a DSL yet; we're working on it.

Here's a big picture of what we do in my Software Performance Optimisation research group. On the left, we see a bunch of compiler technologies and code synthesis technologies; in green, we have application contexts.

Part of a video series:
Part of a chapter:
NHR@FAU PerfLab Seminar

Accessible via

Open access

Duration

00:54:21 min

Recording date

2025-09-09

Uploaded on

2025-09-18 16:46:03

Language

en-US

NHR PerfLab Seminar talk on September 9, 2025
Speaker: Prof. Paul Kelly, Head of Software Performance Optimisation, Imperial College London

Abstract:
Compilers have architecture: the art of compiler design is to find the right representation so that hard optimisation and synthesis problems become easy. This talk will review several projects that motivate, illustrate and explore this idea. With domain-specific languages, we can capture not just the computation, but the mathematics of what is being computed. When targeting accelerators, we design intermediate representations that capture critical computational patterns. Compiler architecture is also software architecture, and I will say a little about our work on MLIR-based tools to support common compiler infrastructure across different DSLs. I’ll also offer some reflections on what we gain from domain-specificity; pointer aliasing is one of the biggest barriers to ambitious optimisation in more general-purpose programs – I will finish with some reflections on how to exploit what we can know in general-purpose code.
 
For a list of past and upcoming NHR PerfLab seminar events, please see: https://hpc.fau.de/research/nhr-perflab-seminar-series/
 

Tags

HPC